Agents that Argue and Explain Classifications of Retinal Conditions

Abstract

Expertise for auditing AI systems in the medical domain is only now being accumulated. Conformity assessment procedures will require systems: (1) to be transparent, (2) not to rely on decisions taken solely by algorithms, and (3) to include safety assurance cases in the documentation, to facilitate technical audits. We are interested here in obtaining transparency in the case of machine learning (ML) applied to the classification of retinal conditions. Achieving high performance metrics with ML has become common practice. In the medical domain, however, algorithmic decisions need to be sustained by explanations. We aim at building a support tool for ophthalmologists able to: (i) explain the decision to the human agent by automatically extracting rules from the learned models; (ii) keep the ophthalmologist in the loop by formalising expert knowledge and including this knowledge in the argumentation machinery; (iii) build explanations by creating argument patterns for each diagnosis. For the classification task, we used a dataset consisting of 699 OCT images: 126 of the Normal class, 210 with Diabetic Retinopathy (DR) and 363 with Age-Related Macular Degeneration (AMD). The dataset contains patients from the Ophthalmology Department of the County Emergency Hospital Cluj-Napoca. All ethical norms and procedures, including anonymisation, have been followed. We used three ML algorithms: decision tree (DT), support vector machine (SVM) and artificial neural network (ANN). From each algorithm we extract diagnosis rules. For expert knowledge, we relied on normative studies (Invernizzi et al. Ophthalmol Retina 2(8):808–815, 2018). For arguing between agents, we used the Jason multi-agent platform. We assume different knowledge bases and reasoning capabilities for each agent. The agents have their own sets of optical coherence tomography (OCT) images on which they apply a distinct ML algorithm, and they model the extracted rules. With these rules, the agents engage in an argumentative process. The resolution of the debate outputs a diagnosis that is then explained to the ophthalmologist by means of argumentation cases. When diagnosing a retinal condition, our solution deals with the following issues. First, the ML models are translated into rules. These rules support explanation by tracing the chain of inferences supporting the diagnosis. Hence, the proposed system complies with the requirement that “algorithmic decisions should be explained to the human agent”. Second, decisions are not based solely on ML algorithms: the architecture also includes expert knowledge. Decisions are taken by exchanging arguments between ML-based algorithms, and conflicts among arguments are verbalised, so the human expert can supervise the decision. Third, the generated argumentation cases structure the evidence for various auditing goals, such as methodology, transparency, and data quality. Along this dimension, the auditor can check the explanations provided against current best practices and standards. The developed system for diagnosing retinal conditions goes beyond most medical software, which focuses on performance metrics. Our approach helps to approve ML-based software in the medical domain. Interleaving rules extracted from ML models with expert knowledge is a step towards balancing the benefits of ML with explainability, aiming at engineering reliable medical applications.
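The rule-extraction step described in the abstract, turning a trained decision tree into if/then diagnosis rules a clinician or an arguing agent can inspect, can be sketched as follows. This is a minimal illustration only: the feature names (`retinal_thickness`, `fluid_score`) and the toy data are assumptions for the sketch, not the paper's actual OCT features or dataset.

```python
# Minimal sketch: extract human-readable rules from a trained decision
# tree (one of the three ML models named in the abstract).
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, made-up OCT-derived features per scan: [retinal_thickness, fluid_score].
X = [[250, 0.1], [410, 0.7], [300, 0.9], [260, 0.2], [420, 0.8], [310, 0.95]]
y = ["Normal", "AMD", "DR", "Normal", "AMD", "DR"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as nested if/then rules that can be
# handed to the argumentation machinery or shown to an ophthalmologist.
rules = export_text(clf, feature_names=["retinal_thickness", "fluid_score"])
print(rules)
```

In the paper's architecture, rules like these (rather than raw model outputs) are what each agent asserts and defends during the debate, which is what makes the final diagnosis traceable.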

Similar Articles

Argumentation Tool that Enables Agents to Argue

Multi-Agent Systems are suitable for providing a framework that allows collaborative processes to be performed in distributed environments. Furthermore, argumentation is a natural way of reaching agreements between several parties. We propose an infrastructure to develop and execute argumentative agents in an open MAS. It offers the tools to develop agents with argumentation capabilities. It also offers...

Building Agents that Plan and Argue in a Social Context

In order for one agent to meet its goals, it will often need to influence another to act on its behalf, particularly in a society in which agents have heterogenous sets of abilities. To effect such influence, it is necessary to consider both the social context and the dialogical context in which influence is exerted, typically through utterance. Both of these facets, the social and the dialogic...

Agents That Explain Their Own Actions

Computer-generated battlefield agents need to be able to explain the rationales for their actions. Such explanations make it easier to validate agent behavior, and can enhance the effectiveness of the agents as training devices. This paper describes an explanation capability called Debrief that enables agents implemented in Soar to describe and justify their decisions. Debrief determines the moti...

Agents that Learn to Explain Themselves

Intelligent artificial agents need to be able to explain and justify their actions. They must therefore understand the rationales for their own actions. This paper describes a technique for acquiring this understanding, implemented in a multimedia explanation system. The system determines the motivation for a decision by recalling the situation in which the decision was made, and replaying the ...

A comparative study of the organization of the Abbasid call and that of the Imam’s agents (aims and principles)

The organizations of the Abbasid call and the Imam’s agents were two covert systems active in different regions of the Islamic territories. A comparative study of the two organizations supports a better judgment as to the practice of the holy Imams and the Shiites versus that of the Abbasids and their followers. The main question of this project is “What are the similarities and differences of t...

Journal

Journal title: Journal of Medical and Biological Engineering

Year: 2021

ISSN: 1609-0985, 2199-4757

DOI: https://doi.org/10.1007/s40846-021-00647-7